
    Fish fillet authentication by image analysis

    The work aims at developing an image analysis procedure able to distinguish high-value fillets of Atlantic cod (Gadus morhua) from those of haddock (Melanogrammus aeglefinus). Images of fresh G. morhua (n = 90) and M. aeglefinus (n = 91) fillets were collected with a flatbed scanner and processed at different levels. Both untreated and edge-based segmented (Canny algorithm) regions of interest were submitted to surface texture evaluation by Grey Level Co-occurrence Matrix analysis. Twelve surface texture variables selected by Principal Component Analysis or by the SELECT algorithm were then used to develop Linear Discriminant Analysis models. An average correct classification rate ranging from 86.05% to 92.31% was obtained in prediction, irrespective of the use of raw or segmented images. These findings pave the way for a simple machine vision system to be implemented along the fish market chain, providing stakeholders with a rapid and cost-effective tool for fighting commercial fraud.
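    As a rough illustration of the kind of pipeline described above, the sketch below computes GLCM texture descriptors for grayscale regions of interest and feeds them to an LDA classifier in Python. The ROIs, labels and feature set are placeholders, and the PCA/SELECT feature-selection step is omitted, so this is not the authors' actual procedure.

```python
# Sketch: GLCM texture features + LDA classification (placeholder data).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def glcm_features(roi, distances=(1, 3), angles=(0, np.pi / 4, np.pi / 2)):
    """Grey Level Co-occurrence Matrix texture descriptors for one grayscale ROI."""
    glcm = graycomatrix(roi, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Placeholder ROIs and labels (0 = cod, 1 = haddock); real data would be scanner crops.
rois = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = np.array([0, 1] * 10)

X = np.vstack([glcm_features(r) for r in rois])
lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, X, labels, cv=5).mean())
```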

    Effect of fresh pork meat conditioning on quality characteristics of salami

    The aim of this work was to evaluate the effect of pork meat conditioning under different relative humidity (RH) values on salami quality characteristics. During a 6-day conditioning period at 0 °C under two levels of RH (95% vs. 80%), meat pH and weight loss were measured. Salami characteristics (moisture, weight loss, texture, appearance properties) were evaluated during 20 days of ripening. Results showed that conditioning at 80% RH yielded significantly drier meat, with a weight loss rate 1.6 times higher than at 95% RH. The lower water content of the meat allowed a shorter salami ripening phase, guaranteeing an appropriate weight loss and the development of the desired texture while maintaining good appearance properties. The acceleration of this production phase represents a clear economic advantage for producers and consumers, leading to higher profit margins and lower retail prices. The possibility of using FT-NIR spectroscopy as a valid tool for the rapid evaluation of salami ripening was also demonstrated.

    A computer aided diagnosis system for lung nodules detection in postero anterior chest radiographs

    This thesis describes a Computer Aided Diagnosis system aimed at lung nodule detection. The fully automated method developed to search for nodules is composed of four steps: segmentation of the lung field, enhancement of the image, extraction of candidate regions, and selection among them of the regions with the highest chance of being True Positives. The segmentation, enhancement and candidate extraction steps are based on multi-scale analysis. The common assumption underlying their development is that the signal representing the details to be detected by each of them (lung borders or nodule regions) is composed of a mixture of simpler signals belonging to different scales and levels of detail. The last step, candidate region classification, is the most complicated; its task is to discern, among a high number of candidate regions, the few True Positives. To this aim, several features and different classifiers have been investigated.

    In Chapter 1 the segmentation algorithm is described; the algorithm has been tested on the images of two different databases, the JSRT and the Niguarda database, both described in the next section, for a total of 409 images. We compared our results with another method presented in the literature and described by Ginneken, in [85], as the one with the best performance at the state of the art; it has been tested on the same images of the JSRT database. No errors were detected in the results obtained by our method, while the previously mentioned one produced an overall number of errors equal to 50. The results obtained on the images of the Niguarda database also confirmed the efficacy of the system, allowing us to say that this is the best method presented so far in the literature. This statement is also supported by the fact that this is the only system tested on such an amount of images, belonging to two different databases.

    Chapter 2 describes the multi-scale enhancement and extraction methods. The enhancement produces an image where the "conspicuity" of nodules is increased, so that nodules of different sizes, located in parts of the lungs characterized by completely different anatomic noise, are more visible. Based on the same assumption, the candidate extraction procedure, described in the same chapter, employs a multi-scale method to detect all the nodules of different sizes. This step has also been compared with two methods ([8] and [1]) described in the literature and tested on the same images. Our implementation of the first one ([8]) produced rather poor results; the second one obtained a sensitivity ratio (see Appendix C for its definition) equal to 86%. The considerably better performance of our method is proved by the fact that the sensitivity ratio we obtained is much higher (equal to 97%) and the number of False Positives detected is much lower.

    The experiments aimed at the classification of the candidates are described in Chapter 3; both a rule-based technique and two learning systems, the Multi Layer Perceptron (MLP) and the Support Vector Machine (SVM), have been investigated. Their input is a set of 16 features. The rule-based system obtained the best performance: the cardinality of the set of candidates left is highly reduced without lowering the sensitivity of the system, since no True Positive region is lost. This performance is much better than that of the system used by Ginneken and Schilam in [1], since its sensitivity is lower (equal to 77%) and the number of False Positives left is comparable. The drawback of a rule-based system is the need to set the thresholds used by the rules; since they are set experimentally, the system depends on the images used to develop it, and on different databases the performance might not be as good. The results of the MLPs and of the SVMs are described in detail, and the ROC analysis of the experiments performed with the SVMs is also reported. Furthermore, the attempt to improve the classification performance led to other experiments employing SVMs trained with more complicated feature sets. The results obtained, being no better than the previous ones, showed the need for a proper selection of the features. Future work will therefore focus on testing other sets of features, and their combinations obtained by means of proper feature selection techniques.
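    A minimal sketch of the candidate-classification step, assuming each candidate region is already described by a 16-dimensional feature vector labelled as nodule or non-nodule; the data are synthetic placeholders and the SVM settings are illustrative, not those tuned in the thesis.

```python
# Sketch: SVM classification of candidate regions, reporting sensitivity.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))          # 16 features per candidate region (placeholder)
y = rng.integers(0, 2, size=500)        # 1 = true nodule, 0 = false positive (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)

# Sensitivity = fraction of true nodules retained by the classifier.
print("sensitivity:", recall_score(y_te, clf.predict(X_te)))
```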

    Intrinsic Dimension Estimation: Relevant Techniques and a Benchmark Framework

    When dealing with datasets comprising high-dimensional points, it is usually advantageous to discover some data structure. A fundamental piece of information needed to this aim is the minimum number of parameters required to describe the data while minimizing the information loss. This number, usually called the intrinsic dimension, can be interpreted as the dimension of the manifold from which the input data are supposed to be drawn. Due to its usefulness in many theoretical and practical problems, in the last decades the concept of intrinsic dimension has gained considerable attention in the scientific community, motivating the large number of intrinsic dimensionality estimators proposed in the literature. However, the problem is still open, since most techniques cannot efficiently deal with datasets drawn from manifolds of high intrinsic dimension and nonlinearly embedded in higher dimensional spaces. This paper surveys some of the most interesting, widely used, and advanced state-of-the-art methodologies. Unfortunately, since no benchmark database exists in this research field, an objective comparison among different techniques is not possible. Consequently, we suggest a benchmark framework and apply it to comparatively evaluate relevant state-of-the-art estimators.
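    For concreteness, the sketch below implements one widely used estimator of this kind, the TwoNN maximum-likelihood estimator (Facco et al., 2017), purely as an example of the family of methods such surveys cover; it is not one of this paper's own proposals, and the test data are synthetic.

```python
# Sketch: TwoNN intrinsic dimension estimate from nearest-neighbor distance ratios.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_dimension(X):
    """Estimate intrinsic dimension from the ratio of 2nd to 1st NN distances."""
    nn = NearestNeighbors(n_neighbors=3).fit(X)
    dist, _ = nn.kneighbors(X)            # column 0 is the point itself (distance 0)
    mu = dist[:, 2] / dist[:, 1]          # r2 / r1 for every point
    return len(X) / np.sum(np.log(mu))    # maximum-likelihood estimate of d

# Sanity check: a 3-D Gaussian cloud linearly embedded in a 10-D space.
rng = np.random.default_rng(0)
Z = rng.normal(size=(2000, 3))
X = Z @ rng.normal(size=(3, 10))
print("estimated intrinsic dimension:", round(twonn_dimension(X), 2))
```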

    Targeting bacterial cell division: A binding site-centered approach to the most promising inhibitors of the essential protein FtsZ

    Binary fission is the most common mode of bacterial cell division and is mediated by a multiprotein complex called the divisome. The constriction of the Z-ring splits the mother bacterial cell into two daughter cells of the same size. The Z-ring is formed by the polymerization of FtsZ, a bacterial protein homologue of eukaryotic tubulin, and it represents the first step of bacterial cytokinesis. The high degree of conservation of FtsZ in most prokaryotic organisms and its relevance in orchestrating the whole division system make this protein a fascinating target in antibiotic research. Indeed, FtsZ inhibition results in the complete blockage of the division system and, consequently, in a bacteriostatic or a bactericidal effect. Since many papers and reviews have already discussed the physiology of FtsZ and its auxiliary proteins, as well as the molecular mechanisms in which they are involved, here we focus on the most compelling FtsZ inhibitors, classified by their main protein binding sites and following a medicinal chemistry approach.

    A cockpit of multiple measures for assessing film restoration quality

    In machine vision, the idea of expressing the quality of a film by a single value is very popular. Usually this value is computed by processing a set of image features with the aim of resembling as much as possible a kind of human judgment of the film quality. Since human quality assessment is a complex mechanism involving many different perceptual aspects, we believe that such an approach can scarcely provide a comprehensive analysis. Especially in the field of digital movie restoration, a single score can hardly provide reliable information about the effects of the various restoring operations. For this reason we introduce an alternative approach, where a set of measures, describing over time basic global and local visual properties of the film frames, is computed in an unsupervised way and delivered to expert evaluators for checking the restoration pipeline and results. The proposed framework can be viewed as a car or airplane cockpit, whose parameters (i.e. the computed measures) are necessary to control the machine status and performance. This cockpit, which is publicly available online, is meant to support the digital restoration process and its assessment.
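    A minimal sketch of the cockpit idea, assuming decoded RGB frames are available as arrays: a few simple global measures are tracked frame by frame and handed to the evaluators. The measure names and the placeholder frames below are illustrative, not the paper's actual measure set.

```python
# Sketch: per-frame global measures tracked over time for expert inspection.
import numpy as np

def frame_measures(frame):
    """Basic global descriptors for one RGB frame (uint8, H x W x 3)."""
    gray = frame.mean(axis=2)
    return {
        "brightness": gray.mean(),
        "contrast": gray.std(),
        "saturation": (frame.max(axis=2) - frame.min(axis=2)).mean(),
        "dark_fraction": (gray < 16).mean(),   # rough fade / flicker indicator
    }

# Placeholder "film": 100 random frames; a real pipeline would decode the video.
frames = [np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8) for _ in range(100)]
measures = [frame_measures(f) for f in frames]
timeline = {k: [m[k] for m in measures] for k in measures[0]}
print({k: round(float(np.mean(v)), 2) for k, v in timeline.items()})
```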

    Complex Data Imputation by Auto-Encoders and Convolutional Neural Networks—A Case Study on Genome Gap-Filling

    Missing data imputation has been a hot topic in the past decade, and many state-of-the-art works have proposed novel, interesting solutions that have been applied in a variety of fields. In the same period, the successful results achieved by deep learning techniques have opened the way to their application to difficult problems where human skill cannot provide a reliable solution. Not surprisingly, some deep learners, mainly exploiting encoder-decoder architectures, have also been designed and applied to the task of missing data imputation. However, most of the proposed imputation techniques have not been designed to tackle "complex data", that is, high-dimensional data belonging to datasets with huge cardinality and describing complex problems. Specifically, they often need critical parameters to be set manually, or exploit complex architectures and/or training phases that make their computational load impracticable. In this paper, after clustering the state-of-the-art imputation techniques into three broad categories, we briefly review the most representative methods and then describe our data imputation proposals, which exploit deep learning techniques specifically designed to handle complex data. Comparative tests on genome sequences show that our deep learning imputers outperform the state-of-the-art KNN-imputation method when filling gaps in human genome sequences.
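    A minimal sketch of encoder-decoder imputation on numeric data, assuming missing entries are marked by a mask and the reconstruction loss is computed on observed entries only; the architecture, sizes and synthetic low-rank data are placeholders, not the genome-specific networks proposed in the paper.

```python
# Sketch: autoencoder-style imputation of masked entries (PyTorch).
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 5)).astype(np.float32)
X = latent @ rng.normal(size=(5, 20)).astype(np.float32)   # complete, low-rank data
mask = rng.random(X.shape) < 0.1                            # True where a value is missing
X_obs = np.where(mask, 0.0, X).astype(np.float32)           # zero-fill gaps for the input

model = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

x_in = torch.tensor(X_obs)
x_true = torch.tensor(X)
m_obs = torch.tensor((~mask).astype(np.float32))            # weight loss on observed entries only

for _ in range(300):
    opt.zero_grad()
    loss = (((model(x_in) - x_true) ** 2) * m_obs).mean()
    loss.backward()
    opt.step()

X_imputed = np.where(mask, model(x_in).detach().numpy(), X)  # fill only the gaps
print("imputation RMSE:", float(np.sqrt(((X_imputed - X)[mask] ** 2).mean())))
```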

    Determination of the geographical origin of green coffee beans using NIR spectroscopy and multivariate data analysis

    In this work, near infrared (NIR) spectroscopy and multivariate data analysis were investigated as a fast and non-destructive method to classify green coffee beans on a continent and country basis. FT-NIR spectra of 191 coffee samples, originating from 2 continents and 9 countries, were acquired by two different laboratories. Laboratory-independent Partial Least Squares Discriminant Analysis (PLS-DA) and interval PLS-DA models were developed following a hierarchical approach, i.e. considering first the continent and then the country of origin as the discrimination rule. The best continent-based classification model correctly identified more than 98% of the samples in prediction, whereas 100% of them were correctly predicted by the best country-based classification model. The inter-laboratory reliability of the proposed method was confirmed by the McNemar test, since no significant differences (P > 0.05) were found. Furthermore, a validation was performed by predicting the spectral test set of one laboratory using the model developed by the other.
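    A minimal sketch of a PLS-DA classifier of the kind used here, implemented as a PLS regression on binarized class labels at the continent level; the synthetic spectra, label names and number of latent variables are placeholders, not the paper's coffee data or model settings.

```python
# Sketch: PLS-DA on NIR spectra via PLS regression with a binary class response.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.normal(size=(191, 500))                   # 191 samples x 500 wavelengths (placeholder)
origins = rng.choice(["Africa", "America"], size=191)   # placeholder continent labels

y = (origins == "Africa").astype(float)                 # 1 = Africa, 0 = America
X_tr, X_te, y_tr, y_te = train_test_split(spectra, y, stratify=origins, random_state=0)

pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel() > 0.5                  # threshold the PLS response
print("continent-level accuracy:", (pred == (y_te > 0.5)).mean())
```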

    Ki67 nuclei detection and ki67-index estimation: A novel automatic approach based on human vision modeling

    Background: The protein ki67 (pki67) is a marker of tumor aggressiveness, and its expression has been proven to be useful in the prognostic and predictive evaluation of several types of tumors. To numerically quantify the presence of pki67 in cancerous tissue areas, pathologists generally analyze histochemical images to count the number of tumor nuclei marked for pki67. This allows estimating the ki67-index, that is, the percentage of tumor nuclei positive for pki67 over all the tumor nuclei. Given the high image resolution and dimensions, its estimation by expert clinicians is particularly laborious and time consuming. Though automatic cell counting techniques have been presented, the problem is still open. Results: In this paper we present a novel automatic approach for the estimation of the ki67-index. The method starts by exploiting the STRESS algorithm to produce a color-enhanced image where all pixels belonging to nuclei are easily identified by thresholding, and then separated into positive (i.e. pixels belonging to nuclei marked for pki67) and negative by a binary classification tree. Next, positive and negative nuclei pixels are processed separately by two multiscale procedures identifying isolated nuclei and separating adjoining nuclei. The multiscale procedures exploit two Bayesian classification trees to recognize positive and negative nuclei-shaped regions. Conclusions: The evaluation of the computed results, both through experts' visual assessments and through the comparison of the computed indexes with those of the experts, proved that the prototype is promising, so that experts believe in its potential as a tool to be exploited in clinical practice as a valid aid for clinicians estimating the ki67-index. The MATLAB source code is open source for research purposes.
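    A minimal sketch of the final ki67-index computation, assuming positive and negative nuclei have already been segmented into binary masks (the STRESS-based enhancement and Bayesian-tree steps are not reproduced); the sketch is in Python rather than MATLAB, and the toy masks and area threshold are illustrative.

```python
# Sketch: ki67-index = % of pki67-positive tumor nuclei over all tumor nuclei.
import numpy as np
from skimage.measure import label, regionprops

def count_nuclei(mask, min_area=30):
    """Count connected components larger than min_area pixels in a binary mask."""
    return sum(1 for r in regionprops(label(mask)) if r.area >= min_area)

def ki67_index(pos_mask, neg_mask):
    pos, neg = count_nuclei(pos_mask), count_nuclei(neg_mask)
    return 100.0 * pos / (pos + neg) if (pos + neg) else 0.0

# Toy masks: two "positive" nuclei and three "negative" ones.
pos = np.zeros((100, 100), dtype=bool)
pos[10:20, 10:20] = True
pos[40:50, 40:50] = True
neg = np.zeros((100, 100), dtype=bool)
neg[70:80, 10:20] = True
neg[70:80, 40:50] = True
neg[70:80, 70:80] = True
print("ki67-index: %.1f%%" % ki67_index(pos, neg))   # expected: 40.0%
```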